-
We introduce a novel framework called REIN: Reliability Estimation by learning an Importance sampling (IS) distribution with Normalizing flows (NFs). The NFs learn probability space maps that transform the probability distribution of the input random variables into a quasi-optimal IS distribution. NFs stack invertible neural networks to construct differentiable bijections with efficiently computed Jacobian determinants. The NF 'pushes forward' a realization from the input probability distribution into a realization from the IS distribution, with importance weights calculated using the change of variables formula. We also propose a loss function to learn an NF map that minimizes the reverse Kullback-Leibler divergence between the 'pushforward' distribution and a sequentially updated target distribution obtained by modifying the optimal IS distribution. We demonstrate REIN's efficacy on a set of benchmark problems that feature very low failure rates, multiple failure modes, and high dimensionality, comparing against other variance reduction methods. We also consider two simple applications, the reliability analyses of a thirty-four-story building and a cantilever tube, to demonstrate the applicability of REIN to practical problems of interest. Compared with other methods, REIN is shown to be useful for high-dimensional reliability estimation problems with very small failure probabilities.
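As an illustration of the general recipe this abstract describes, the sketch below trains a small RealNVP-style flow by minimizing the reverse KL divergence to a smoothed optimal-IS target, then forms an importance-weighted failure probability estimate. The limit-state function g, the smoothing temperature tau, and the coupling architecture are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 2
base = torch.distributions.MultivariateNormal(torch.zeros(dim), torch.eye(dim))

def g(x):                       # hypothetical limit-state function; failure when g(x) <= 0
    return 5.0 - x.sum(dim=1)

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: an invertible map whose Jacobian
    log-determinant is cheap to compute, as the abstract describes."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(nn.Linear(self.d, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * (dim - self.d)))
    def forward(self, z):
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self.net(z1).chunk(2, dim=1)
        return torch.cat([z1, z2 * torch.exp(s) + t], dim=1), s.sum(dim=1)

class Flow(nn.Module):
    """A stack of coupling layers with coordinate flips in between."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))
    def forward(self, z):
        logdet = torch.zeros(z.shape[0])
        for layer in self.layers:
            z, ld = layer(z)
            logdet, z = logdet + ld, z.flip(1)
        return z, logdet

flow = Flow(dim)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
tau = 0.5  # assumed smoothing of the failure indicator in the target density

for step in range(2000):
    z = base.sample((256,))
    x, logdet = flow(z)
    log_q = base.log_prob(z) - logdet   # pushforward density via change of variables
    # smoothed optimal-IS target: p(x) * sigmoid(-g(x) / tau), up to normalization
    log_target = base.log_prob(x) + nn.functional.logsigmoid(-g(x) / tau)
    loss = (log_q - log_target).mean()  # reverse KL divergence, up to a constant
    opt.zero_grad(); loss.backward(); opt.step()

# importance sampling estimate of the failure probability, weights w = p(x)/q(x)
with torch.no_grad():
    z = base.sample((100_000,))
    x, logdet = flow(z)
    w = torch.exp(base.log_prob(x) - (base.log_prob(z) - logdet))
    p_fail = (w * (g(x) <= 0).float()).mean()
    print(f"estimated failure probability: {p_fail:.2e}")
```

Here the failure indicator in the optimal IS density is replaced by a sigmoid so the target is differentiable; annealing tau toward zero would mimic the sequentially updated target mentioned in the abstract.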
-
We propose a novel modular inference approach that combines two different generative models, generative adversarial networks (GANs) and normalizing flows, to approximate the posterior distribution of physics-based Bayesian inverse problems posed in high-dimensional ambient spaces. We dub the proposed framework GAN-Flow. The method leverages the intrinsic dimension reduction and superior sample generation capabilities of GANs to define a low-dimensional, data-driven prior distribution. Once a trained GAN prior is available, the inverse problem is solved entirely in the latent space of the GAN using variational Bayesian inference with a normalizing flow-based variational distribution: the flow approximates the low-dimensional posterior by transforming realizations from the Gaussian latent prior into realizations of the variational posterior. The trained GAN generator then maps realizations from this approximate posterior in the latent space back to the high-dimensional ambient space. We also propose a two-stage training strategy for GAN-Flow wherein the two generative models are trained sequentially. Thereafter, GAN-Flow can estimate the statistics of posterior-predictive quantities of interest at virtually no additional computational cost. The synergy between the two types of generative models allows us to overcome many challenges associated with applying Bayesian inference to large-scale inverse problems, chief among which are specifying an informative prior and sampling from the high-dimensional posterior. GAN-Flow does not involve Markov chain Monte Carlo simulation, making it particularly suitable for large-scale inverse problems. We demonstrate the efficacy and flexibility of GAN-Flow on various physics-based inverse problems of varying ambient dimensionality and prior knowledge, using different types of GANs and normalizing flows. Notably, one of the applications we consider is a 65,536-dimensional phase retrieval problem wherein an object is reconstructed from sparse, noisy measurements of the magnitude of its Fourier transform.
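The latent-space inference stage can be sketched as follows: a frozen stand-in network plays the role of the trained GAN generator, and a small coupling flow is trained as the variational distribution by minimizing the negative ELBO (equivalently, the reverse KL divergence). The generator G, the linear forward operator A, the noise level sigma, and the synthetic data y are hypothetical placeholders, not the paper's models.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, ambient_dim, data_dim = 8, 256, 32

# Stand-in for a GAN generator pretrained in stage one; frozen in stage two.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, ambient_dim))
for p in G.parameters():
    p.requires_grad_(False)

A = torch.randn(data_dim, ambient_dim) / ambient_dim ** 0.5  # hypothetical linear forward operator
sigma = 0.05                                                 # assumed measurement noise level
y = A @ G(torch.randn(latent_dim)) + sigma * torch.randn(data_dim)  # synthetic data

prior = torch.distributions.MultivariateNormal(torch.zeros(latent_dim),
                                               torch.eye(latent_dim))

class Coupling(nn.Module):
    """Affine coupling layer; stacked, these form the flow-based variational map."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(nn.Linear(self.d, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * (dim - self.d)))
    def forward(self, z):
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self.net(z1).chunk(2, dim=1)
        return torch.cat([z1, z2 * torch.exp(s) + t], dim=1), s.sum(dim=1)

layers = nn.ModuleList(Coupling(latent_dim) for _ in range(4))

def push(z):
    """Push latent-prior samples through the flow; return samples and log|det J|."""
    logdet = torch.zeros(z.shape[0])
    for layer in layers:
        z, ld = layer(z)
        logdet, z = logdet + ld, z.flip(1)
    return z, logdet

opt = torch.optim.Adam(layers.parameters(), lr=1e-3)
for step in range(2000):
    z0 = prior.sample((128,))
    z, logdet = push(z0)                    # variational posterior samples (latent space)
    log_q = prior.log_prob(z0) - logdet     # flow density via change of variables
    x = G(z)                                # decode to the high-dimensional ambient space
    log_lik = -0.5 * ((x @ A.T - y) ** 2).sum(dim=1) / sigma ** 2
    loss = (log_q - prior.log_prob(z) - log_lik).mean()  # negative ELBO (reverse KL)
    opt.zero_grad(); loss.backward(); opt.step()

# posterior-predictive statistics at virtually no additional cost
with torch.no_grad():
    z, _ = push(prior.sample((5000,)))
    x_post = G(z)
    x_mean, x_std = x_post.mean(dim=0), x_post.std(dim=0)
```

Because inference happens entirely in the 8-dimensional latent space, the flow never has to model the 256-dimensional ambient posterior directly; the frozen generator supplies that map for free, which is the division of labor the abstract describes.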
-
The objective of this work is to provide a Bayesian reinterpretation of model falsification. We show that model falsification can be viewed as an approximate Bayesian computation (ABC) approach when hypotheses (models) are sampled from a prior. To achieve this, we recast model falsifiers as discrepancy metrics and density kernels so that they may be adopted within ABC and generalized ABC (GABC) methods; we call the resulting frameworks model falsified ABC and GABC, respectively. Moreover, as a result of this reinterpretation, the set of unfalsified models can be shown to consist of realizations of an approximate posterior. We consider both error-domain and likelihood-domain model falsification in our exposition. Model falsified (G)ABC is used to tackle two practical inverse problems, albeit with synthetic measurements. The first type of problem concerns parameter estimation and includes applications of ABC to the inference of a statistical model whose likelihood is difficult to compute, and the identification of a cubic-quintic dynamical system. The second type of example involves model selection for the base isolation system of a four degree-of-freedom base-isolated structure. The performance of model falsified ABC and GABC is compared with that of Bayesian inference. The results show that model falsified (G)ABC can solve inverse problems in a computationally efficient manner; the results are also used to compare the various falsifiers in their ability to approximate the posterior and some of its important statistics. Further, we show that model falsifier-based density kernels can be used in kernel regression to infer unknown model parameters and compute structural responses under epistemic uncertainty.
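A minimal sketch of the reinterpretation, under assumed ingredients (a toy linear model, a uniform prior, a root-mean-square falsifier, and a tolerance eps, none of which are the paper's case studies): the same falsifier serves once as an ABC discrepancy metric, whose unfalsified set approximates posterior samples, and once as a GABC density kernel, which turns hard rejection into importance weighting.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
theta_true, sigma = 2.0, 0.1
y_obs = theta_true * t + sigma * rng.standard_normal(t.size)  # synthetic measurements

def simulate(theta):
    """Toy forward model: a line through the origin plus measurement noise."""
    return theta * t + sigma * rng.standard_normal(t.size)

def falsifier(y_sim):
    """Error-domain falsifier recast as a discrepancy metric:
    root-mean-square misfit between simulation and data."""
    return np.sqrt(np.mean((y_sim - y_obs) ** 2))

n, eps = 20000, 0.15
theta = rng.uniform(0.0, 4.0, n)                  # hypotheses sampled from the prior
d = np.array([falsifier(simulate(th)) for th in theta])

# Model falsified ABC: the unfalsified set is a sample from an approximate posterior.
unfalsified = theta[d < eps]

# Model falsified GABC: the falsifier defines a density kernel, giving
# importance weights instead of a hard accept/reject rule.
w = np.exp(-0.5 * (d / eps) ** 2)
w /= w.sum()
post_mean = np.sum(w * theta)
post_std = np.sqrt(np.sum(w * (theta - post_mean) ** 2))

print(f"unfalsified models: {unfalsified.size}, ABC mean: {unfalsified.mean():.3f}")
print(f"GABC mean: {post_mean:.3f}, GABC std: {post_std:.3f}")
```

Shrinking eps tightens both the falsification threshold and the kernel bandwidth, trading acceptance rate for a closer approximation of the posterior.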